Results 1 - 10 of 10
1.
Database (Oxford); 2024. 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38554132

ABSTRACT

In this report, we analyse the use of virtual reality (VR) as a method to navigate and explore complex knowledge graphs. Over the past few decades, linked data technologies [Resource Description Framework (RDF) and Web Ontology Language (OWL)] have been shown to be valuable for encoding such graphs, and many tools have emerged to interactively visualize RDF. However, as knowledge graphs get larger, most of these tools struggle with the limitations of 2D screens or 3D projections. Therefore, in this paper, we evaluate the use of VR to visually explore SPARQL Protocol and RDF Query Language (SPARQL) construct queries, including a series of tutorial videos that demonstrate the power of VR (see Graph2VR tutorial playlist: https://www.youtube.com/playlist?list=PLRQCsKSUyhNIdUzBNRTmE-_JmuiOEZbdH). We first review existing methods for Linked Data visualization and then report the creation of a prototype, Graph2VR. Finally, we report a first evaluation of the use of VR for exploring linked data graphs. Our results show that most participants enjoyed testing Graph2VR and found it to be a useful tool for graph exploration and data discovery. The usability study also provides valuable insights for potential future improvements to Linked Data visualization in VR.
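The core operation the abstract builds on, a SPARQL CONSTRUCT query, returns a new graph assembled from triples that match a pattern in the source graph; that result graph is what a tool like Graph2VR can render. A minimal plain-Python sketch of the match-and-construct step (the triples and predicate names below are invented for illustration, not taken from Graph2VR):

```python
def construct(graph, pattern, template):
    """Return new triples built from bindings of a single triple pattern.

    `pattern` and `template` are (subject, predicate, object) tuples in
    which strings starting with '?' act as variables.
    """
    result = set()
    for triple in graph:
        bindings = {}
        for slot, value in zip(pattern, triple):
            if slot.startswith("?"):
                bindings[slot] = value      # bind variable to this value
            elif slot != value:
                break                       # constant mismatch: no match
        else:
            # every slot matched: emit the template with variables filled in
            result.add(tuple(bindings.get(t, t) for t in template))
    return result

# Hypothetical source graph of (subject, predicate, object) triples.
data = {
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:bob", "ex:knows", "ex:carol"),
    ("ex:alice", "ex:age", "42"),
}

# Mimics: CONSTRUCT { ?a ex:connectedTo ?b } WHERE { ?a ex:knows ?b }
subgraph = construct(data,
                     ("?a", "ex:knows", "?b"),
                     ("?a", "ex:connectedTo", "?b"))
```

A real SPARQL engine handles multi-pattern queries, filters and joins; this sketch only shows why the result of a CONSTRUCT query is itself a graph that can be explored further.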


Subject(s)
Semantic Web , Virtual Reality , Humans , Databases, Factual , Language
2.
Eur J Hum Genet ; 32(1): 69-76, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37322132

ABSTRACT

The coming-into-force of the EU General Data Protection Regulation (GDPR) is a watershed moment in the legal recognition of enforceable rights to informational self-determination. The rapid evolution of legal requirements applicable to data use, however, has the potential to outstrip the capabilities of networks of biomedical data users to respond to the shifting norms. It can also delegitimate established institutional bodies that are responsible for assessing and authorising the downstream use of data, including research ethics committees and institutional data custodians. These burdens are especially pronounced for clinical and research networks that are of transnational scale, because the legal compliance burden for outbound international data transfers from the EEA is especially high. Legislatures, courts, and regulators in the EU should therefore implement the following three legal changes. First, the responsibilities of particular actors in a data sharing network should be delimited through the contractual allocation of responsibilities between collaborators. Second, the use of data through secure data processing environments should not trigger the international transfer provisions of the GDPR. Third, the use of federated data analysis methodologies that do not provide analysis nodes or downstream users access to identifiable personal data as part of the outputs of those analyses should not be considered circumstances of joint controllership, nor lead users of non-identifiable data to be considered controllers or processors. These small clarifications of, or modifications to, the GDPR would facilitate the exchange of biomedical data amongst clinicians and researchers.


Subject(s)
Computer Security , Computer Security/legislation & jurisprudence , European Union
4.
PLoS Med ; 20(1): e1004036, 2023 01.
Article in English | MEDLINE | ID: mdl-36701266

ABSTRACT

BACKGROUND: Preterm birth is the leading cause of perinatal morbidity and mortality and is associated with adverse developmental and long-term health outcomes, including several cardiometabolic risk factors and outcomes. However, evidence about the association of preterm birth with later body size derives mainly from studies using birth weight as a proxy of prematurity rather than an actual length of gestation. We investigated the association of gestational age (GA) at birth with body size from infancy through adolescence. METHODS AND FINDINGS: We conducted a two-stage individual participant data (IPD) meta-analysis using data from 253,810 mother-child dyads from 16 general population-based cohort studies in Europe (Denmark, Finland, France, Italy, Norway, Portugal, Spain, the Netherlands, United Kingdom), North America (Canada), and Australasia (Australia) to estimate the association of GA with body mass index (BMI) and overweight (including obesity) adjusted for the following maternal characteristics as potential confounders: education, height, prepregnancy BMI, ethnic background, parity, smoking during pregnancy, age at child's birth, gestational diabetes and hypertension, and preeclampsia. Pregnancy and birth cohort studies from the LifeCycle and the EUCAN-Connect projects were invited and were eligible for inclusion if they had information on GA and at least one measurement of BMI between infancy and adolescence. Using a federated analytical tool (DataSHIELD), we fitted linear and logistic regression models in each cohort separately with a complete-case approach and combined the regression estimates and standard errors through random-effects study-level meta-analysis providing an overall effect estimate at early infancy (>0.0 to 0.5 years), late infancy (>0.5 to 2.0 years), early childhood (>2.0 to 5.0 years), mid-childhood (>5.0 to 9.0 years), late childhood (>9.0 to 14.0 years), and adolescence (>14.0 to 19.0 years).
GA was positively associated with BMI in the first decade of life, with the greatest increase in mean BMI z-score during early infancy (0.02, 95% confidence interval (CI): 0.00; 0.05, p < 0.05) per week of increase in GA, while in adolescence, preterm individuals reached similar levels of BMI (0.00, 95% CI: -0.01; 0.01, p 0.9) as term counterparts. The association between GA and overweight revealed a similar pattern of association with an increase in odds ratio (OR) of overweight from late infancy through mid-childhood (OR 1.01 to 1.02) per week increase in GA. By adolescence, however, GA was slightly negatively associated with the risk of overweight (OR 0.98 [95% CI: 0.97; 1.00], p 0.1) per week of increase in GA. Although based on only four cohorts (n = 32,089) that reached the age of adolescence, data suggest that individuals born very preterm may be at increased odds of overweight (OR 1.46 [95% CI: 1.03; 2.08], p < 0.05) compared with term counterparts. Findings were consistent across cohorts and sensitivity analyses despite considerable heterogeneity in cohort characteristics. However, residual confounding may be a limitation in this study, while findings may be less generalisable to settings in low- and middle-income countries. CONCLUSIONS: This study based on data from infancy through adolescence from 16 cohort studies found that GA may be important for body size in infancy, but the strength of association attenuates consistently with age. By adolescence, preterm individuals have on average a similar mean BMI to peers born at term.
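The two-stage approach described above (each cohort fits its own regression; the per-cohort estimates and standard errors are then pooled by random-effects meta-analysis) can be sketched as follows. This is a generic DerSimonian-Laird pooling, not the paper's actual DataSHIELD code, and the cohort estimates below are invented for illustration:

```python
import math

def random_effects_pool(estimates, std_errors):
    """DerSimonian-Laird pooling; returns (pooled_estimate, pooled_se, tau2)."""
    v = [se ** 2 for se in std_errors]           # within-cohort variances
    w = [1.0 / vi for vi in v]                   # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Cochran's Q measures spread of cohort estimates around the fixed pool.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    k = len(estimates)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-cohort variance
    w_star = [1.0 / (vi + tau2) for vi in v]     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical per-cohort BMI z-score slopes per week of GA (not real data).
est = [0.02, 0.03, 0.01, 0.04]
ses = [0.010, 0.015, 0.020, 0.012]
pooled, se, tau2 = random_effects_pool(est, ses)
```

Only the summary statistics (estimate and standard error) leave each cohort, which is what makes the federated, privacy-preserving analysis possible.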


Subject(s)
Overweight , Premature Birth , Child , Pregnancy , Female , Humans , Infant, Newborn , Infant , Child, Preschool , Adolescent , Overweight/epidemiology , Overweight/complications , Gestational Age , Risk Factors , Premature Birth/epidemiology , Cohort Studies , Birth Weight , Body Mass Index
5.
J Dev Orig Health Dis ; 14(2): 190-198, 2023 04.
Article in English | MEDLINE | ID: mdl-35957574

ABSTRACT

Optimizing research on the developmental origins of health and disease (DOHaD) involves implementing initiatives maximizing the use of the available cohort study data; achieving sufficient statistical power to support subgroup analysis; and using participant data presenting adequate follow-up and exposure heterogeneity. It also involves being able to undertake comparison, cross-validation, or replication across data sets. To meet these requirements, cohort study data need to be findable, accessible, interoperable, and reusable (FAIR); more particularly, they often need to be harmonized. Harmonization is required to achieve or improve comparability of the putatively equivalent measures collected by different studies on different individuals. Although the characteristics of the research initiatives generating and using harmonized data vary extensively, all are confronted by similar issues. Having to collate, understand, process, host, and co-analyze data from individual cohort studies is particularly challenging. The scientific success and timely management of projects can be facilitated by an ensemble of factors. The current document provides an overview of the 'life course' of research projects requiring harmonization of existing data and highlights key elements to be considered from the inception to the end of the project.


Subject(s)
Research Design , Humans , Cohort Studies , Retrospective Studies
6.
Int J Epidemiol ; 49(4): 1067-1074, 2020 08 01.
Article in English | MEDLINE | ID: mdl-32617581

ABSTRACT

Good data curation is integral to cohort studies, but it is not always done to a level necessary to ensure the longevity of the data a study holds. In this opinion paper, we introduce the concept of data curation debt-the data curation equivalent to the software engineering principle of technical debt. Using the context of UK cohort studies, we define data curation debt-describing examples and their potential impact. We highlight that accruing this debt can make it more difficult to use the data in the future. Additionally, the long-running nature of cohort studies means that interest is accrued on this debt and compounded over time-increasing the impact a debt could have on a study and its stakeholders. Primary causes of data curation debt are discussed across three categories: longevity of hardware, software and data formats; funding; and skills shortages. Based on cross-domain best practice, strategies to reduce the debt and preventive measures are proposed-with importance given to the recognition and transparent reporting of data curation debt. Describing the debt in this way, we encapsulate a multi-faceted issue in simple terms understandable by all cohort study stakeholders. Data curation debt is not confined to the UK: it is an issue the international community must be aware of and address. This paper aims to stimulate a discussion between cohort studies and their stakeholders on how to address the issue of data curation debt. If data curation debt is left unchecked, it could become impossible to use highly valued cohort study data, and ultimately it represents an existential risk to studies themselves.


Subject(s)
Data Curation , Software , Cohort Studies , Humans
7.
F1000Res ; 9: 1095, 2020.
Article in English | MEDLINE | ID: mdl-34026049

ABSTRACT

Cohort studies collect, generate and distribute data over long periods of time - often over the lifecourse of their participants. It is common for these studies to host a list of publications (which can number many thousands) on their website to demonstrate the impact of the study and facilitate the search of existing research to which the study data has contributed. The ability to search and explore these publication lists varies greatly between studies. We believe a lack of rich search and exploration functionality of study publications is a barrier to entry for new or prospective users of a study's data, since it may be difficult to find and evaluate previous work in a given area. These lists of publications are also typically manually curated, resulting in a lack of rich metadata to analyse, making bibliometric analysis difficult. We present here a software pipeline that aggregates metadata from a variety of third-party providers to power a web-based search and exploration tool for lists of publications. Alongside core publication metadata (e.g. author lists and keywords), we include geocoding of first authors and citation counts in our pipeline. This allows a characterisation of a study as a whole based on common locations of authors, frequency of keywords, citation profile, and so on. This enriched publications metadata can be useful for generating study impact metrics and web-based graphics for public dissemination. In addition, the pipeline produces a research data set for bibliometric analysis or social studies of science. We use a previously published list of publications from a cohort study as an exemplar input data set to demonstrate the output and utility of the pipeline.
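The characterisation step the abstract describes (summarising a study as a whole from its enriched publication metadata) might look like the following sketch; the record fields and numbers are assumptions for illustration, not the pipeline's real schema:

```python
from collections import Counter

# Hypothetical enriched metadata records for a study's publication list.
publications = [
    {"keywords": ["cohort", "obesity"], "citations": 12},
    {"keywords": ["cohort", "genetics"], "citations": 3},
    {"keywords": ["obesity"], "citations": 30},
]

def characterise(pubs):
    """Summarise a study from its publication metadata: keyword
    frequencies and a simple citation profile."""
    keyword_freq = Counter(k for p in pubs for k in p["keywords"])
    total_citations = sum(p["citations"] for p in pubs)
    return {
        "top_keywords": keyword_freq.most_common(2),
        "mean_citations": total_citations / len(pubs),
    }

profile = characterise(publications)
```

The real pipeline additionally resolves identifiers against third-party metadata providers and geocodes first-author affiliations; this sketch only shows the final aggregation over whatever metadata has been collected.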


Subject(s)
Metadata , Publications , Bibliometrics , Cohort Studies , Humans , Prospective Studies
8.
Wellcome Open Res ; 2: 74, 2017.
Article in English | MEDLINE | ID: mdl-28989981

ABSTRACT

Three synthetic datasets - of observation size 15,000, 155,000 and 1,555,000 participants, respectively - were created by simulating eleven cardiac and anthropometric variables from nine collection ages of the ALSPAC birth cohort study. The synthetic datasets retain similar data properties to the ALSPAC study data they are simulated from (covariance matrices, as well as the mean and variance values of the variables) without including the original data itself or disclosing participant information. In this instance, the three synthetic datasets have been utilised in an academia-industry collaboration to build a prototype virtual reality data analysis software, but they could have a broader use in method and software development projects where sensitive data cannot be freely shared.
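The simulation approach described above (sampling from the source study's means and covariance matrix, so the output preserves those properties without containing any real participant) can be sketched for two hypothetical variables. The means and covariances below are invented, not ALSPAC values:

```python
import math
import random

def sample_mvn_2d(mean, cov, n, seed=0):
    """Draw n samples from a 2D normal with the given mean and covariance,
    via a hand-rolled 2x2 Cholesky factorisation."""
    rng = random.Random(seed)
    # Cholesky factor L of cov, so that L @ z + mean has covariance cov.
    l11 = math.sqrt(cov[0][0])
    l21 = cov[1][0] / l11
    l22 = math.sqrt(cov[1][1] - l21 ** 2)
    samples = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        samples.append((mean[0] + l11 * z1,
                        mean[1] + l21 * z1 + l22 * z2))
    return samples

# e.g. two correlated body measurements with a positive covariance
data = sample_mvn_2d(mean=(165.0, 60.0),
                     cov=[[36.0, 18.0], [18.0, 64.0]],
                     n=15000)
```

The sample means and covariance converge to the specified values as n grows, which is exactly the "similar data properties, no original data" guarantee the abstract describes; real multi-variable simulations would use a full Cholesky or a library routine rather than this 2D special case.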

9.
BMC Med Ethics ; 18(1): 24, 2017 04 04.
Article in English | MEDLINE | ID: mdl-28376776

ABSTRACT

BACKGROUND: Because no single person or group holds knowledge about all aspects of research, mechanisms are needed to support knowledge exchange and engagement. Expertise in the research setting necessarily includes scientific and methodological expertise, but also expertise gained through the experience of participating in research and/or being a recipient of research outcomes (as a patient or member of the public). Engagement is, by its nature, reciprocal and relational: the process of engaging research participants, patients, citizens and others (the many 'publics' of engagement) brings them closer to the research but also brings the research closer to them. When translating research into practice, engaging the public and other stakeholders is explicitly intended to make the outcomes of translation relevant to its constituency of users. METHODS: In practice, engagement faces numerous challenges and is often time-consuming, expensive and 'thorny' work. We explore the epistemic and ontological considerations and implications of four common critiques of engagement methodologies that contest: representativeness, communication and articulation, impacts and outcome, and democracy. The ECOUTER (Employing COnceptUal schema for policy and Translation Engagement in Research) methodology addresses problems of representation and epistemic foundationalism using a methodology that asks, "How could it be otherwise?" ECOUTER affords the possibility of engagement where spatial and temporal constraints are present, relying on saturation as a method of 'keeping open' the possible considerations that might emerge and including reflexive use of qualitative analytic methods. RESULTS: This paper describes the ECOUTER process, focusing on one worked example and detailing lessons learned from four other pilots. ECOUTER uses mind-mapping techniques to 'open up' engagement, iteratively and organically. 
ECOUTER aims to balance the breadth, accessibility and user-determination of the scope of engagement. An ECOUTER exercise comprises four stages: (1) engagement and knowledge exchange; (2) analysis of mindmap contributions; (3) development of a conceptual schema (i.e. a map of concepts and their relationship); and (4) feedback, refinement and development of recommendations. CONCLUSION: ECOUTER refuses fixed truths but also refuses a fixed nature. Its promise lies in its flexibility, adaptability and openness. ECOUTER will be formed and re-formed by the needs and creativity of those who use it.


Subject(s)
Communication , Community Participation , Research Design , Translational Biomedical Research , Humans
10.
F1000Res ; 5: 1307, 2016.
Article in English | MEDLINE | ID: mdl-27366320

ABSTRACT

ECOUTER (Employing COnceptUal schema for policy and Translation Engagement in Research; écouter is French for 'to listen') is a new stakeholder engagement method incorporating existing evidence to help participants draw upon their own knowledge of cognate issues and interact on a topic of shared concern. The results of an ECOUTER can form the basis of recommendations for research, governance, practice and/or policy. This paper describes the development of a digital methodology for the ECOUTER engagement process based on currently available mind-mapping freeware. The implementation of an ECOUTER process tailored to applications within health studies is outlined for both online and face-to-face scenarios. Limitations of the present digital methodology are discussed, highlighting the requirement for purpose-built software for ECOUTER research purposes.
